Overview: The goal of the following project is to programmatically retrieve and analyze all of Elon Musk’s tweets from the Twitter API. We compile over 10,000 of Musk’s tweets into a comprehensive dataset and perform sentiment analysis for each tweet object.
| Tweets | Following | Followers |
|---|---|---|
| 18.92 K | 118 | 102.8 M |
| user_id | screen_name | followers_count | statuses_count | friends_count | account_created_at | verified |
|---|---|---|---|---|---|---|
| x44196397 | elonmusk | 95589999 | 18134 | 114 | 2009-06-02 | TRUE |
Elon Musk, the man who transformed the electric car industry and accelerated the world’s space exploration efforts, gives the public an unfiltered look into his eccentric mind through his Twitter presence. With over 77 million followers, Musk has one of the most followed Twitter accounts, receiving thousands of shares, likes, and comments on each of his tweets. Musk tweets almost daily, covering a wide range of topics from serious technical matters to lighthearted memes. Some of his tweets have a big impact, making headlines, stirring up controversy, and even moving the needle on everything from Tesla’s stock price to cryptocurrency markets. The companies that Musk runs are also hugely influential and disruptive.
To gain a better understanding of Musk’s Twitter profile, we
programmatically retrieve and analyze all of Elon Musk’s tweets using
the rtweet package, which provides a convenient interface
between the Twitter API and R code. We compile over 10,000 of his tweets
into a comprehensive dataset. After scraping a decade’s worth of Elon
Musk’s tweets, we analyze and sort the data to answer the following
questions.
- (tweet types) What is the ratio of mentions, replies, retweets, quotes, and organic tweets?
- (user engagement) How does Elon Musk engage with other Twitter users?
- (twitter activity) Are there any trends in when Musk tweets?
- (topics) What are Elon Musk’s most tweeted topics?

The first question examines Musk’s tweet distribution, which we answer by sorting all tweets into categories based on tweet type and ranking each category by tweet volume. To answer the second question, on Musk’s nature of engagement, we unpack tweets containing conversations with and directed to other users. To answer the third question, on Musk’s Twitter activity, we sort Musk’s feed by the timeframes during which tweets were posted. Lastly, we answer the fourth question by analyzing which hashtags and words Musk uses most, showing which topics dominated his feed each year.
Using the Twitter API, we compile over 10,000 of Musk’s tweets into a
comprehensive dataset. The dataset consists of Elon Musk’s most recent
Tweets during 2015-2022, stored in RDS format, where each tweet is in
its own separate row object. All Tweets are collected, parsed, and
plotted using rtweet in R. In total, the dataset contains more
than 10,000 tweets, including retweets and replies. All tweet
objects are stored in a single database.
| # | Variable | # | Variable | # | Variable | # | Variable |
|---|---|---|---|---|---|---|---|
| 1 | status_id | 14 | hashtags | 27 | quoted_followers_count | 40 | retweet_location |
| 2 | created_at | 15 | symbols | 28 | quoted_location | 41 | retweet_description |
| 3 | user_id | 16 | media_expanded_url | 29 | quoted_description | 42 | retweet_verified |
| 4 | screen_name | 17 | media_type | 30 | quoted_verified | 43 | name |
| 5 | text | 18 | mentions_screen_name | 31 | retweet_status_id | 44 | location |
| 6 | source | 19 | quoted_status_id | 32 | retweet_text | 45 | description |
| 7 | reply_to_screen_name | 20 | quoted_text | 33 | retweet_created_at | 46 | followers_count |
| 8 | is_quote | 21 | quoted_created_at | 34 | retweet_source | 47 | friends_count |
| 9 | is_retweet | 22 | quoted_source | 35 | retweet_favorite_count | 48 | statuses_count |
| 10 | favorite_count | 23 | quoted_favorite_count | 36 | retweet_retweet_count | 49 | account_created_at |
| 11 | retweet_count | 24 | quoted_retweet_count | 37 | retweet_user_id | 50 | verified |
| 12 | quote_count | 25 | quoted_user_id | 38 | retweet_screen_name | | |
| 13 | reply_count | 26 | quoted_screen_name | 39 | retweet_followers_count | | |
To obtain the data of a Twitter account, we must first sign up for a developer account and create an application with the credentials required to access the Twitter API. After setting up the Twitter application, we load the rtweet package in R and set the application authentication keys generated in the developer portal. Next, we use the following command to create and authenticate a Twitter token, allowing access to Twitter data.
library(rtweet) # load rtweet package
# create_token() builds the token from the app credentials and caches it
# as the default for subsequent rtweet calls
twitter_token <- create_token(
  app = "twitter_app",
  consumer_key = "api_key",
  consumer_secret = "api_secret",
  access_token = "access_token",
  access_secret = "access_secret")
Searching Twitter’s full archive API with the rtweet package, we run
the search_fullarchive() function to access and extract the
complete information from all of the historical tweets related to a
particular user. These extracted tweets can then be saved to an
RDS file. The example code below captures Elon Musk’s tweets from
January 01, 2010, to May 28, 2022.
df <- search_fullarchive(
  q = "from:elonmusk",
  n = 10000,
  env_name = environment_name, # name of the premium dev environment
  fromDate = "201001010000",
  toDate = "202205280000")
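As noted above, the retrieved tweets can be persisted with base R’s RDS functions, so later sessions can skip the API call entirely (the file name here is illustrative):

```r
# Save the tweet data frame to an RDS file
saveRDS(df, file = "elonmusk_tweets.rds")

# Reload it in a later session without calling the API again
df <- readRDS("elonmusk_tweets.rds")
```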
| created_at | screen_name | text | favorite_count | retweet_count | quote_count | reply_count | is_quote | is_retweet |
|---|---|---|---|---|---|---|---|---|
| 2022-08-04 19:23:21 | elonmusk | Livestream the 2022 Annual Tesla Shareholder Meeting today at 4:30pm CT https://t.co/KttPYLPcxi https://t.co/eF9kRVfFOr | 0 | 0 | 0 | 0 | FALSE | TRUE |
| 2022-08-04 18:04:12 | elonmusk | @garyblack00 I had more kids in Q2 than they made cars! | 58736 | 3593 | 1115 | 3173 | FALSE | FALSE |
| 2022-08-04 16:24:08 | elonmusk | @dogeofficialceo @chicago_glenn @mySA @happydad Stone of Destiny | 2935 | 178 | 30 | 436 | FALSE | FALSE |
Optionally, we can load the Twitter API data into a data management system, such as Azure Databricks, and write queries to run a SQL job and retrieve the data.
Azure Databricks is an analytics platform based on
Microsoft Azure cloud services that runs the latest versions of Apache
Spark, an open-source engine providing large-scale data processing APIs
in general-purpose programming languages such as Scala, Python, and R.
Specifically, Databricks provides a cloud-based interactive workspace
with fully managed Spark clusters, allowing users to quickly execute
Spark code in an easy-to-use environment. From the Azure portal, we
create and launch a Databricks workspace, establish a Spark cluster, and
configure a notebook on the cluster. In the notebook, we use
SparkR to read the dataset into a Spark DataFrame and
run a SQL job to query the data.
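A minimal sketch of that notebook workflow, assuming the RDS dataset was uploaded to DBFS (the file path and view name are hypothetical):

```r
library(SparkR)
sparkR.session()

# Read the saved tweets into a local data frame, then convert to a Spark DataFrame
tweets_local <- readRDS("/dbfs/FileStore/elonmusk_tweets.rds")
tweets <- as.DataFrame(tweets_local)

# Register a temporary view and run a SQL job against it
createOrReplaceTempView(tweets, "tweets")
result <- sql("SELECT screen_name, COUNT(*) AS n FROM tweets GROUP BY screen_name")
head(result)
```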
When dealing with Twitter data, one of the first steps is to distinguish organic (original, user-written) tweets from other tweet types: retweets, replies, mentions, and quotes. Analyzing the ratio of mentions, replies, retweets, quotes, and organic tweets provides a general overview of the user account type.
The different types of tweets are general tweets, mentions, replies, retweets, and quotes. General tweets are original Twitter posts containing text, photos, a GIF, and/or video, but do not include any mentions, replies, retweets, or quotes. Both mentions and replies are types of tweets containing other account usernames, though replies are sent in direct response to another user’s tweet. Lastly, retweets and quotes are both re-postings of another person’s tweet, although quotes allow users to post another person’s tweet with their own added comment.
As a first step, we distinguish between organic tweets, retweets and replies. For this, we identify the tweet type from the data collected by the Twitter API contained in certain columns, including “is_retweet”, “reply_to_status_id”, and others. The following shows how to remove the retweets and replies from the data to keep only the organic/general tweets.
# Remove retweets and replies
dfGeneral <- df[df$is_retweet == FALSE,] %>% subset(is.na(reply_to_status_id))
Similarly, we want to create a different dataset for each tweet type.
dfMention <- subset(df, !is.na(df$mentions_user_id))  # tweets mentioning other users
dfReply <- subset(df, !is.na(df$reply_to_status_id))  # replies to other tweets
dfRtweet <- df[df$is_retweet == TRUE,]                # retweets
dfQuote <- df[df$is_quote == TRUE,]                   # quote tweets
In the above, we subset the tweets into five datasets, each
containing only general tweets, mentions, replies, retweets, or quotes.
We then count the number of observations for each dataset using the
nrow() function and store the information in a separate
dataframe containing the tweet type and its respective count.
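The counting step described above can be sketched as follows (the tweet_types name is introduced here for illustration):

```r
# Count each subset and rank tweet types by volume
tweet_types <- data.frame(
  type  = c("general", "mention", "reply", "retweet", "quote"),
  count = c(nrow(dfGeneral), nrow(dfMention), nrow(dfReply),
            nrow(dfRtweet), nrow(dfQuote)))
tweet_types <- tweet_types[order(-tweet_types$count), ]
```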
Now, for example, we can show information for each of Musk’s retweets and query the data to obtain his most frequently retweeted users. To identify the most frequently retweeted users, we use tidyr tools to unnest, count, and sort each user from Musk’s retweets.
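A sketch of that query, assuming rtweet’s retweet_screen_name column holds the author of each retweeted tweet:

```r
library(dplyr)

# Most frequently retweeted users, one row per source account
dfRtweet %>%
  tidyr::unnest_longer(retweet_screen_name) %>%
  count(retweet_screen_name, sort = TRUE) %>%
  head(10)
```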
To track Elon Musk’s engagement with people on Twitter, we want to look into tweets containing conversations with and directed to other users. We begin by unpacking information for each of Elon Musk’s tweets that mention another person’s username. Specifically, mentions are a type of tweet containing other account usernames, preceded by the “@” symbol.
| created_at | mentions_user_id | mentions_screen_name | text |
|---|---|---|---|
| 2022-07-27 01:40:54 | x924966648922157056 x593977467 x42555311 x16831353 | chicago_glenn michaelsiconolf KirstenGrind EmilyGlazer | @chicago_glenn @michaelsiconolf @KirstenGrind @EmilyGlazer 99% of journalism is reading someone else’s story on the Internet, changing it up a little & pressing send |
| 2022-08-01 17:28:24 | x1492510878323081216 | spideycyp_155 | @spideycyp_155 So much water under the bridge since then |
| 2022-08-04 16:19:18 | x924966648922157056 x9830752 x38672574 | chicago_glenn mySA happydad | @chicago_glenn @mySA @happydad Yeah, pretty good. I like the name. |
As shown above, there exist tweets containing multiple mentioned usernames within the body of the text, all grouped together in a single row. So now we must manipulate the data so that each mentioned user for a tweet forms its own row, which allows us to count the total number of times Musk mentioned a unique user.
dfMention %>%
  dplyr::mutate(user = stringr::str_extract_all(text, "@\\w+")) %>%
  tidyr::unnest_longer(user) %>%
  dplyr::count(user, sort = TRUE)
The above command uses the str_extract_all() function to
extract the mentioned users for each tweet and
unnest_longer() to transform the nested lists into tidy
rows so that each row contains only one user. Lastly, we count the total
number of observations for each unique user.
SELECT mentions_screen_name, COUNT(*) AS n
FROM dfMentions
WHERE mentions_screen_name IS NOT NULL
GROUP BY mentions_screen_name
ORDER BY n DESC;
| screen_name | n |
|---|---|
| spacex | 1323 |
| tesla | 1050 |
| erdayastronaut | 705 |
| ppathole | 474 |
| flcnhvy | 470 |
| teslaownerssv | 459 |
| wholemarsblog | 384 |
| teslarati | 351 |
| nasa | 264 |
| cleantechnica | 217 |
Linking conversations together, a reply is a type of tweet sent in direct response to another user’s tweet. Similar to mentions, replies allow users to direct tweets toward other twitter users and interact in conversations. Following the same general procedure above, we obtain the following results.
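That procedure can be sketched for replies by counting rtweet’s reply_to_screen_name column:

```r
# Users Musk replies to most often
dfReply %>%
  dplyr::count(reply_to_screen_name, sort = TRUE) %>%
  head(10)
```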
Here, we provide an overall overview of the activity of the account
by examining when Musk posts his tweets. This includes analyzing the
frequency of tweets by timeframes including year, month, weekday, hour,
and more. Parsing the information from the created_at
column, we extract the timestamp to display the year, month, day, and
hour associated with the publish date for each tweet.
# SparkR: parse the timestamp and derive year, month, and weekday columns
df$created_at <- to_timestamp(df$created_at)
df$year <- year(df$created_at)
df$month <- date_format(to_date(df$created_at), "MMMM")
df$weekday <- date_format(to_date(df$created_at), "EEEE")
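The time column shown in the table that follows buckets each tweet’s hour into a rough time of day. A minimal base-R sketch of that step, assuming a local data frame and the lubridate package (the hour cutoffs are illustrative):

```r
library(lubridate)

# Bucket each tweet's hour into a coarse time-of-day label
hr <- hour(df$created_at)
df$time <- ifelse(hr < 6,  "night",
           ifelse(hr < 12, "morning",
           ifelse(hr < 18, "afternoon",
           ifelse(hr < 22, "evening", "night"))))
```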
| created_at | year | month | weekday | time |
|---|---|---|---|---|
| 2022-02-22 06:42:47 | 2022 | February | Tuesday | morning |
| 2022-02-06 01:38:11 | 2022 | February | Sunday | night |
| 2021-09-01 01:52:22 | 2021 | September | Wednesday | night |
| 2020-11-25 20:38:25 | 2020 | November | Wednesday | evening |
| 2020-06-10 18:42:22 | 2020 | June | Wednesday | evening |
| 2019-03-03 23:12:18 | 2019 | March | Sunday | night |
The above data allows us to explore Musk’s Twitter activity, ranging from the frequency of his tweets over years to exactly which days of the week or hours of the day have more or less activity. As a result, we see that Musk is the most active on Thursdays and Fridays, mostly tweeting at night.
We can go deeper and look at exactly the time and day of the week in which Musk has the most activity posting tweets with a detailed histogram. Consistent with what was previously obtained, we see that there is greater activity on Thursdays at 6 pm.
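One way to sketch such a plot with ggplot2, assuming weekday and hour columns have been derived in a local data frame:

```r
library(dplyr)
library(ggplot2)

# Count tweets per weekday and hour, then draw one histogram per weekday
df %>%
  count(weekday, hour) %>%
  ggplot(aes(x = hour, y = n)) +
  geom_col() +
  facet_wrap(~ weekday) +
  labs(x = "Hour of day", y = "Number of tweets")
```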
Continuing to analyze Musk’s habits on Twitter, we want to know which topics dominate Musk’s Twitter feed. To get the topics and themes of his content, we examine the most frequent hashtags and words used in his tweets and how many tweets are associated with these hashtags and words.
Most Used Hashtags:
First, we extract hashtags, all words preceded by a #
character, from the content of the tweets data. The following command
extracts the hashtags from the tweet text into a character vector,
followed by counting how often Elon Musk uses each unique hashtag.
hashtag <- df$text %>% str_extract_all("#[A-Za-z0-9_]+")  # extract hashtags
hashtag_word <- unlist(hashtag)        # flatten into one character vector
hashtag_word <- tolower(hashtag_word)
hashtag_word <- gsub("[[:punct:]ー]", "", hashtag_word)   # strip punctuation
hashtag_count <- sort(table(hashtag_word), decreasing = TRUE)  # count each hashtag
Most Used Words:
Next, we look at which words Musk mentions the most in his tweets. Figuring out the most common words in Elon Musk’s tweets involves text mining tasks. The first step is to clean up the text from our dataset by using lowercase and removing punctuation, usernames, links, etc. We then use R tidy tools to convert the text to tidy formats and remove stop words.
# Regex for stripping URLs, HTML entities, and retweet markers from tweets
replace_reg <- "https?://[^\\s]+|&amp;|&lt;|&gt;|\\bRT\\b"
# Clean text
words <- dfGeneral %>%
dplyr::mutate(
text = str_remove_all(text, replace_reg),
text = str_remove_all(text, "[[:punct:]]"),
text = str_remove_all(text, "[[:digit:]]")) %>%
# Split into words
unnest_tokens(word, text, token = "tweets") %>%
# Remove stop words
anti_join(stop_words, by = "word")
In the above command, the pattern matching function
str_remove_all() removes unwanted text, and the
unnest_tokens() function splits the text of each tweet into
tokens, using a one-word-per-row format. We then use
anti_join() with the stop_words lexicon to remove
common stop words from the tokenized text.
Above, we used the unnest_tokens function to tokenize by
word; however, we can also use these functions to tokenize into
consecutive sequences of words, called n-grams. We do this by adding the
option token = "ngrams" and setting \(n\) to the number of words. Setting \(n\) to
\(2\) allows us to examine pairs of two
consecutive words, often called bigrams.
bigrams <- dfGeneral %>%
dplyr::mutate(text = str_replace_all(text, replace_reg, "")) %>%
# split into word pairs
unnest_tokens(bigram, text, token = "ngrams", n = 2) %>%
separate(bigram, into = c("first","second"), sep = " ", remove = FALSE) %>%
# remove stop words
anti_join(stop_words, by = c("first" = "word")) %>%
anti_join(stop_words, by = c("second" = "word")) %>%
filter(str_detect(first, "[a-z]") & str_detect(second, "[a-z]"))
| bigram | n |
|---|---|
| tesla model | 35 |
| boring company | 28 |
| falcon heavy | 26 |
| space station | 25 |
| cape canaveral | 22 |
| climate change | 19 |
| tesla team | 19 |
| tesla owners | 18 |
| static fire | 16 |
| giga berlin | 15 |
| rocket landing | 15 |
| upper stage | 15 |
Here we use the syuzhet R package to iterate over a
vector of strings consisting of the text from all of Elon Musk’s tweets
in our dataset. To obtain this vector, the plain_tweets()
function from the rtweet package converts the tweet text
into cleaned-up, plain text. We then pass the vector to the
get_sentiment() function, which returns sentiment values based on the
package’s default sentiment dictionary, developed from a collection of
human-coded sentences.
# Round each timestamp into day-sized bins
round_time <- function(x, secs)
  as.POSIXct(hms::round_hms(x, secs))
# Per-tweet sentiment score, shifted down by 0.5
sent_scores <- function(x)
  syuzhet::get_sentiment(plain_tweets(x)) - .5

df.sentiment <- df %>%
  dplyr::mutate(days = round_time(created_at, 60 * 60 * 24),
                sentiment = sent_scores(text)) %>%
  dplyr::group_by(days) %>%
  dplyr::summarise(sentiment = sum(sentiment, na.rm = TRUE))
Extending the above sentiment analysis, the next step is to
understand the opinion or emotion in the text. First, we must clean the
text from our dataset so that it’s in a tidy format. We accomplish this
using the R function gsub() to replace unwanted text and
the get_nrc_sentiment() function to get the emotions and
valences from the NRC sentiment dictionary for each word from all of
Musk’s tweets.
R Code:
txt <- c("rt|RT", "http\\w+", "<.*?>", "@\\w+", "[[:punct:]]", "\r?\n|\r", "[[:digit:]]", "[ |\t]{2,}", "^ ", " $")
cleanTweet <- as.vector(df$text)
# gsub() is a base R function (there is no grep package); it accepts a
# single pattern, so we collapse the patterns into one regex
cleanTweet <- gsub(paste(txt, collapse = "|"), "", cleanTweet)
textSentiment <- syuzhet::get_nrc_sentiment(cleanTweet)
nrc_sentiment <- cbind(df, textSentiment) %>%
dplyr::select(created_at, anger, anticipation, disgust, fear,
joy, sadness, surprise, trust, negative, positive)
In the above command, the gsub function replaces all occurrences of
the given patterns and the get_nrc_sentiment function calculates the
presence of eight different emotions and their corresponding valence.
The resulting columns include the eight emotions anger,
anticipation, disgust, fear,
joy, sadness, surprise, and
trust, along with their respective
positive or negative valence.
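To see which emotions dominate overall, the per-tweet scores can be summed across the dataset (a sketch using the nrc_sentiment data frame built above):

```r
# Sum each emotion column across all tweets, excluding the timestamp
emotion_totals <- colSums(nrc_sentiment[, setdiff(names(nrc_sentiment), "created_at")])
sort(emotion_totals, decreasing = TRUE)
```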
In this article, I aimed to show how to extract and analyse tweets using the free-to-use programming software R. I hope you found this guide helpful for building your own Twitter analytics report, which includes:

- Showing which tweets worked best and which didn’t.
- Insights into tweeting behaviour: the ratio of organic tweets, replies, and retweets, the time of tweet publication, and the platforms from which tweets are published.
- Insights into the content of the tweets: the most frequent words and hashtags used, the accounts from which most retweets originate, and a sentiment analysis capturing the tone of the tweets.